
    Personalization of Saliency Estimation

    Most existing saliency models use low-level features or task descriptions when generating attention predictions. However, the link between observer characteristics and gaze patterns is rarely investigated. We present a novel saliency prediction technique which takes viewers' identities and personal traits into consideration when modeling human attention. Instead of only computing image salience for average observers, we consider the interpersonal variation in the viewing behaviors of observers with different personal traits and backgrounds. We present an enriched derivative of the GAN architecture which is able to generate personalized saliency predictions when fed with image stimuli and specific information about the observer. Our model contains a generator which produces grayscale saliency heat maps based on the image and an observer label. The generator is paired with an adversarial discriminator which learns to distinguish generated salience from ground truth salience. The discriminator also takes the observer label as an input, which contributes to the personalization ability of our approach. We evaluate the performance of our personalized salience model against a benchmark model and other un-personalized predictions, and illustrate improvements in prediction accuracy for all tested observer groups.
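
    A minimal sketch of the conditional-GAN structure described above, assuming a PyTorch implementation; the layer sizes, the label-embedding scheme, and the class names Generator and Discriminator are illustrative assumptions, not the authors' code.

        import torch
        import torch.nn as nn

        class Generator(nn.Module):
            def __init__(self, n_observers, embed_dim=16):
                super().__init__()
                self.label_embed = nn.Embedding(n_observers, embed_dim)
                # image (3 channels) + broadcast observer embedding -> grayscale saliency map
                self.net = nn.Sequential(
                    nn.Conv2d(3 + embed_dim, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
                )

            def forward(self, image, observer_id):
                b, _, h, w = image.shape
                lab = self.label_embed(observer_id).view(b, -1, 1, 1).expand(-1, -1, h, w)
                return self.net(torch.cat([image, lab], dim=1))

        class Discriminator(nn.Module):
            def __init__(self, n_observers, embed_dim=16):
                super().__init__()
                self.label_embed = nn.Embedding(n_observers, embed_dim)
                # image (3) + saliency map (1) + observer embedding -> real/fake score map
                self.net = nn.Sequential(
                    nn.Conv2d(4 + embed_dim, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                    nn.Conv2d(64, 1, 3, stride=2, padding=1),
                )

            def forward(self, image, saliency, observer_id):
                b, _, h, w = image.shape
                lab = self.label_embed(observer_id).view(b, -1, 1, 1).expand(-1, -1, h, w)
                return self.net(torch.cat([image, saliency, lab], dim=1))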

    WAYLA - Generating Images from Eye Movements

    We present a method for reconstructing images viewed by observers based only on their eye movements. By exploring the relationships between gaze patterns and image stimuli, the "What Are You Looking At?" (WAYLA) system learns to synthesize photo-realistic images that are similar to the original pictures being viewed. The WAYLA approach is based on the Conditional Generative Adversarial Network (Conditional GAN) image-to-image translation technique of Isola et al. We consider two specific applications: the first reconstructs newspaper images from gaze heat maps, and the second performs detailed reconstruction of images containing only text. The newspaper image reconstruction process is divided into two image-to-image translation operations, the first mapping gaze heat maps into image segmentations, and the second mapping the generated segmentation into a newspaper image. We validate the performance of our approach using various evaluation metrics, along with human visual inspection. All results confirm the ability of our network to perform image generation tasks using eye tracking data.
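
    A minimal sketch of the two-stage translation pipeline described above, assuming two pix2pix-style generators (stage1_gen, stage2_gen) trained separately; the function name and tensor shapes are assumptions for illustration, not the WAYLA code.

        import torch

        @torch.no_grad()
        def reconstruct_newspaper(heatmap, stage1_gen, stage2_gen):
            """heatmap: (1, 1, H, W) gaze heat map tensor scaled to [0, 1]."""
            segmentation = stage1_gen(heatmap)   # stage 1: heat map -> layout segmentation
            image = stage2_gen(segmentation)     # stage 2: segmentation -> newspaper image
            return image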

    Coupled aerodynamic and acoustical predictions for turboprops

    To predict the noise fields for proposed turboprop airplanes, an existing turboprop noise code by Farassat has been modified to accept blade pressure inputs from a three-dimensional aerodynamic code. An Euler-type code can handle the nonlinear transonic flow of these high-speed, highly swept blades. This turbofan code was modified to allow the calculation mesh to extend to about twice the blade radius and to apply circumferential periodicity rather than solid-wall boundary conditions in the region between the blade tip and the outer shroud. Outputs were added for input to the noise prediction program and for color contour plots of various flow variables. The Farassat input subroutines were modified to read files of blade coordinates and predicted surface pressures. Aerodynamic and acoustic results are shown for the SR-3 model blade. Comparison of the predicted acoustic results with measured data shows good agreement.
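
    An illustrative sketch (in Python, not the original Fortran) of the data hand-off the abstract describes, in which the noise code's input routine reads a file of blade coordinates and predicted surface pressures written by the aerodynamic code; the whitespace-separated x, y, z, pressure layout and the file name are hypothetical assumptions.

        import numpy as np

        def read_blade_surface(path):
            """Read one blade-surface file; columns assumed to be x, y, z, pressure."""
            data = np.loadtxt(path)
            coords, pressure = data[:, :3], data[:, 3]
            return coords, pressure

        # coords, pressure = read_blade_surface("sr3_surface_pressures.dat")  # hypothetical file
        # The coordinates and pressures would then drive the Farassat-type acoustic prediction.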

    Immunological Approaches to Load Balancing in MIMD Systems

    Effective utilization of Multiple-Instruction-Multiple-Data (MIMD) parallel computers requires the application of good load balancing techniques. In this paper we show that heuristics derived from observation of complex natural systems, such as the mammalian immune system, can lead to effective load balancing strategies. In particular, the immune system processes of regulation, suppression, tolerance, and memory are seen to be powerful load balancing mechanisms. We provide a detailed example of our approach applied to parallelization of an image processing task, that of extracting the circuit design from the images of the layers of a CMOS integrated circuit. The results of this experiment show that good speedup characteristics can be obtained when using immune system derived load balancing strategies. Comment: The work described in this paper was done between 1990 and 2001, and was not published at that time.
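
    A toy sketch of one immune-inspired heuristic family mentioned above, combining a memory of previously seen work types with suppression of heavily loaded nodes; the scoring rule and data layout are assumptions for illustration, not the strategy from the paper.

        def assign_tile(tile_kind, workers):
            """workers: list of dicts with 'load' (int) and 'memory' (set of tile kinds seen)."""
            def score(w):
                affinity = 1.0 if tile_kind in w["memory"] else 0.0   # immune-style "memory"
                return affinity - w["load"]                           # "suppress" overloaded nodes
            best = max(workers, key=score)
            best["load"] += 1
            best["memory"].add(tile_kind)
            return best

        workers = [{"load": 0, "memory": set()} for _ in range(4)]
        for tile in ["metal1", "poly", "metal1", "via", "poly"]:   # hypothetical CMOS layer tiles
            assign_tile(tile, workers)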

    Remote measurement of turbidity and chlorophyll through aerial photography

    Studies were conducted utilizing six different film and filter combinations to quantitatively detect chlorophyll and turbidity in six farm ponds. The low range of turbidity from 0-35 JTU correlated well with the density readings from the green band of normal color film, and the high range above 35 JTU was found to correlate with density readings in the red band of color infrared film. The effect of many of the significant variables can be reduced by using standardized procedures when taking the photographs. Attempts to detect chlorophyll were masked by the turbidity. The ponds which were highly turbid also had high chlorophyll concentrations, whereas the ponds with low turbidity also had low chlorophyll concentrations. This prevented a direct correlation for this parameter. Several suggested approaches are cited for possible future investigations.
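
    A brief sketch of the two-range calibration implied above: turbidity is regressed on film-density readings separately for the low range (green band of normal color film, 0-35 JTU) and, analogously, the high range (red band of color infrared film); the numeric values below are placeholders, not measurements from the study.

        import numpy as np

        green_density = np.array([0.42, 0.48, 0.55, 0.61])   # hypothetical densitometer readings
        low_range_jtu = np.array([5.0, 12.0, 22.0, 33.0])    # hypothetical turbidity values (JTU)
        slope, intercept = np.polyfit(green_density, low_range_jtu, 1)

        def turbidity_low_range(density):
            """Estimate 0-35 JTU turbidity from a green-band film density reading."""
            return slope * density + intercept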

    An Ontology-based Image Repository for a Biomedical Research Lab

    We have developed a prototype web-based database for managing images acquired during experiments in a biomedical research lab studying the factors controlling cataract development. Based on an evolving ontology we are developing for describing the experimental data and protocols used in the lab, the image repository allows lab members to organize image data by multiple attributes. The use of an ontology for developing this and other tools will facilitate intercommunication among tools, and eventual data sharing with other researchers.
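
    An illustrative sketch of the kind of ontology-tagged image record such a repository might store and query; the attribute names (protocol, specimen, lens_region) are assumed for illustration and are not the lab's actual ontology terms.

        from dataclasses import dataclass, field

        @dataclass
        class ImageRecord:
            path: str
            protocol: str                    # experimental protocol term from the ontology
            specimen: str                    # specimen or strain identifier
            attributes: dict = field(default_factory=dict)   # further ontology attributes

        records = [ImageRecord("img_0001.tif", "slit-lamp", "mouse-123",
                               {"lens_region": "cortex"})]
        # Queries can then filter on any combination of ontology attributes:
        cortex_images = [r for r in records if r.attributes.get("lens_region") == "cortex"]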

    Target Features Affect Visual Search, A Study of Eye Fixations

    Visual search refers to the task of finding a target object among a set of distracting objects in a visual display. In this paper, based on an independent analysis of the COCO-Search18 dataset, we investigate how the performance of human participants during visual search is affected by different parameters such as the size and eccentricity of the target object. We also study the correlation between the error rate of participants and search performance. Our studies show that a bigger and more eccentric target is found faster with fewer fixations. Our code for the graphics is publicly available at https://github.com/ManooshSamiei/COCOSearch18_Analysis. Comment: 5 pages, 3 figures.
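
    A short sketch of the kind of analysis described, correlating target size and eccentricity with the number of fixations; the column names and values are assumptions about a fixation table derived from COCO-Search18, not the authors' actual data.

        import pandas as pd

        df = pd.DataFrame({
            "target_area":   [1200, 5400, 800, 9100],   # hypothetical target sizes (pixels^2)
            "eccentricity":  [14.0, 3.5, 18.2, 2.1],    # hypothetical distance from center (deg)
            "num_fixations": [7, 3, 9, 2],
        })
        # Rank correlation of target size and eccentricity with fixation count:
        print(df.corr(method="spearman")["num_fixations"])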